• Home
  • Introduction
  • Data Source
  • Data Visualization
  • Exploratory Data Analysis
  • ARMA/ARIMA/SARIMA Model
  • ARIMAX Model
  • Financial Time Series Model
  • Deep Learning for TS
  • Conclusion

Impact of Macroeconomic Factors on AT&T Stock Price

AT&T Inc. is a multinational telecommunications conglomerate that provides a range of services including wireless communication, internet, television, and digital entertainment. It is one of the largest communication service providers in the United States and has a significant presence in other countries as well. In recent years, AT&T Inc. has been focusing on expanding its 5G network and acquiring media companies to strengthen its digital entertainment offerings.

AT&T (ticker symbol T) is one of the top companies in the communication services sector, offering wireless, broadband, and entertainment services. To analyze its stock price behavior alongside macroeconomic conditions, an ARIMAX+ARCH/GARCH model can be employed.

By using an ARIMAX+ARCH/GARCH model to analyze the stock price behavior of AT&T, we can gain insights into how macroeconomic factors impact the company’s performance. For example, an increase in GDP growth rate or a decrease in unemployment rate may lead to increased consumer spending and a rise in AT&T’s stock price. Conversely, an increase in inflation or interest rates may lead to a decrease in consumer spending and a decline in AT&T’s stock price.

Time Series Plot

  • AT&T
  • Differenced Log Series
  • GDP Growth
  • Inflation
  • Interest
  • Unemployment
Code
# get data
library(quantmod)   # getSymbols(), BBands(), chartSeries()
library(plotly)     # plot_ly(), subplot()
# (later sections also assume forecast, ggplot2, reshape2, TSstudio, urca,
#  FinTS, fGarch, tseries, lubridate, and dplyr are loaded)

options("getSymbols.warning4.0"=FALSE)
options("getSymbols.yahoo.warning"=FALSE)

# auto.assign = FALSE returns the xts object directly; the second call
# assigns the same series to an object named `T` in the global environment
data.info = getSymbols("T", src='yahoo', from = '2010-01-01', to = "2023-03-01", auto.assign = FALSE)
data = getSymbols("T", src='yahoo', from = '2010-01-01', to = "2023-03-01")
df <- data.frame(Date=index(T), coredata(T))

# create Bollinger Bands
bbands <- BBands(T[,c("T.High","T.Low","T.Close")])

# join and subset data
df <- subset(cbind(df, data.frame(bbands[,1:3])), Date >= "2010-01-01")


# color column for increasing and decreasing days (vectorized)
df$direction <- ifelse(df$T.Close >= df$T.Open, 'Increasing', 'Decreasing')

i <- list(line = list(color = '#73C6B6'))
d <- list(line = list(color = '#7F7F7F'))

# plot candlestick chart

fig <- df %>% plot_ly(x = ~Date, type="candlestick",
          open = ~T.Open, close = ~T.Close,
          high = ~T.High, low = ~T.Low, name = "T",
          increasing = i, decreasing = d) 
fig <- fig %>% add_lines(x = ~Date, y = ~up , name = "B Bands",
            line = list(color = '#ccc', width = 0.5),
            legendgroup = "Bollinger Bands",
            hoverinfo = "none", inherit = F) 
fig <- fig %>% add_lines(x = ~Date, y = ~dn, name = "B Bands",
            line = list(color = '#ccc', width = 0.5),
            legendgroup = "Bollinger Bands", inherit = F,
            showlegend = FALSE, hoverinfo = "none") 
fig <- fig %>% add_lines(x = ~Date, y = ~mavg, name = "Mv Avg",
            line = list(color = '#C052B3', width = 0.5),
            hoverinfo = "none", inherit = F) 
fig <- fig %>% layout(yaxis = list(title = "Price"))

# plot volume bar chart
fig2 <- df 
fig2 <- fig2 %>% plot_ly(x=~Date, y=~T.Volume, type='bar', name = "T Volume",
          color = ~direction, colors = c('#73C6B6','#7F7F7F')) 
fig2 <- fig2 %>% layout(yaxis = list(title = "Volume"))

# create rangeselector buttons
rs <- list(visible = TRUE, x = 0.5, y = -0.055,
           xanchor = 'center', yref = 'paper',
           font = list(size = 9),
           buttons = list(
             list(count=1,
                  label='RESET',
                  step='all'),
             list(count=3,
                  label='3 YR',
                  step='year',
                  stepmode='backward'),
             list(count=1,
                  label='1 YR',
                  step='year',
                  stepmode='backward'),
             list(count=1,
                  label='1 MO',
                  step='month',
                  stepmode='backward')
           ))

# subplot with shared x axis
fig <- subplot(fig, fig2, heights = c(0.7,0.2), nrows=2,
             shareX = TRUE, titleY = TRUE)
fig <- fig %>% layout(title = paste("AT&T Stock Price: January 2010 - March 2023"),
         xaxis = list(rangeselector = rs),
         legend = list(orientation = 'h', x = 0.5, y = 1,
                       xanchor = 'center', yref = 'paper',
                       font = list(size = 10),
                       bgcolor = 'transparent'))

fig
Code
log(data.info$`T.Adjusted`) %>% diff() %>% chartSeries(theme=chartTheme('white'),up.col='#73C6B6')

Code
#import the data
gdp <- read.csv("DATA/RAW DATA/gdp-growth.csv")

#change date format
gdp$Date <- as.Date(gdp$DATE , "%m/%d/%Y")

#drop DATE column
gdp <- subset(gdp, select = -c(1))

#export the cleaned data
gdp_clean <- gdp
write.csv(gdp_clean, "DATA/CLEANED DATA/gdp_clean_data.csv", row.names=FALSE)

#plot gdp growth rate 
fig <- plot_ly(gdp, x = ~Date, y = ~value, type = 'scatter', mode = 'lines',line = list(color = 'rgb(240, 128, 128)'))
fig <- fig %>% layout(title = "U.S GPD Growth Rate: 2010 - 2022",xaxis = list(title = "Time"),yaxis = list(title ="GDP Growth Rate"))
fig
Code
#import the data
inflation_rate <- read.csv("DATA/RAW DATA/inflation-rate.csv")

#cleaning the data
#remove unwanted columns
inflation_rate_clean <- subset(inflation_rate, select = -c(1,HALF1,HALF2))

#convert the data to time series data
inflation_data_ts <- ts(as.vector(t(as.matrix(inflation_rate_clean))), start=c(2010,1), end=c(2023,2), frequency=12)

#export the data
write.csv(inflation_rate_clean, "DATA/CLEANED DATA/inflation_rate_clean_data.csv", row.names=FALSE)


#plot inflation rate 
fig <- autoplot(inflation_data_ts, ylab = "Inflation Rate", color="#FFA07A")+ggtitle("U.S. Inflation Rate: January 2010 - February 2023")+theme_bw()
ggplotly(fig)
Code
#import the data
interest_data <- read.csv("DATA/RAW DATA/interest-rate.csv")

#change date format
interest_data$Date <- as.Date(interest_data$Date , "%m/%d/%Y")

#export the cleaned data
interest_clean_data <- interest_data
write.csv(interest_clean_data, "DATA/CLEANED DATA/interest_rate_clean_data.csv", row.names=FALSE)

#plot interest rate 
fig <- plot_ly(interest_data, x = ~Date, y = ~value, type = 'scatter', mode = 'lines',line = list(color='rgb(219, 112, 147)'))
fig <- fig %>% layout(title = "U.S Interest Rate: January 2010 - March 2023",xaxis = list(title = "Time"),yaxis = list(title ="Interest Rate"))
fig
Code
#import the data
unemployment_rate <- read.csv("DATA/RAW DATA/unemployment-rate.csv")

#change date format
unemployment_rate$Date <- as.Date(unemployment_rate$Date , "%m/%d/%Y")

# export the data
write.csv(unemployment_rate, "DATA/CLEANED DATA/unemployment_rate_clean_data.csv", row.names=FALSE)

#plot unemployment rate 
fig <- plot_ly(unemployment_rate, x = ~Date, y = ~Value, type = 'scatter', mode = 'lines',line = list(color = 'rgb(189, 183, 107)'))
fig <- fig %>% layout(title = "U.S Unemployment Rate: January 2010 - March 2023",xaxis = list(title = "Time"),yaxis = list(title ="Unemployment Rate"))
fig

From 2010 to 2014, AT&T’s stock price had a relatively steady increase, likely due to the company’s consistent revenue growth and strong financial performance. However, from 2014 to 2016, AT&T’s stock price experienced some volatility, which could be attributed to various factors such as changes in regulatory policies and increased competition in the telecommunication industry.

In 2016, AT&T’s stock price began to recover, likely due to the company’s acquisition of Time Warner, which expanded its portfolio of media and entertainment assets. This trend continued into 2017 and 2018, with AT&T’s stock price reaching an all-time high in mid-2018.

However, from late 2018 to early 2020, AT&T’s stock price experienced some decline, which could be attributed to factors such as the company’s high debt load and increasing competition from new players in the telecommunication industry.

The outbreak of the COVID-19 pandemic in early 2020 caused a sharp dip in AT&T’s stock price, as investors were uncertain about the impact of the pandemic on the company’s operations and financial performance. Nevertheless, AT&T’s established position in the telecommunication and media industries helped the stock partially rebound later in 2020 and into early 2021.

Since early 2021, AT&T’s stock price has experienced some volatility, likely due to a combination of factors such as global economic uncertainty and fluctuations in consumer demand for the company’s products and services.

Since early 2020, AT&T’s stock price has experienced some volatility, likely due to a combination of factors such as global economic uncertainty and fluctuations in consumer demand for the company’s products.

As discussed before, the macroeconomic factors of GDP growth, inflation, interest rates, and unemployment rate are closely interrelated and play a crucial role in the overall health and stability of an economy. From 2010 to 2023, the global economy experienced a mix of ups and downs, with periods of strong GDP growth followed by slowdowns and recessions.

The second plot shows the first difference of the logarithm of the adjusted AT&T stock price. Taking the first difference removes any long-term trends and transforms the time series into a stationary process. From the plot, we can observe that the first difference of the logarithm of the AT&T stock price appears to be stationary, as the mean and variance are roughly constant over time.
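As a quick corroboration of this visual impression, a unit root test can be applied to the differenced log series. Below is a minimal sketch using the tseries package and the data.info object loaded earlier; the choice of the Augmented Dickey-Fuller test (with its default lag order) is an assumption, not part of the original analysis.

Code
library(tseries)

# first difference of the log of the adjusted price (drop the leading NA)
log_returns <- na.omit(diff(log(data.info$T.Adjusted)))

# Augmented Dickey-Fuller test; the null hypothesis is a unit root
adf.test(as.numeric(log_returns))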

Endogenous and Exogenous Variables

  • Plot
  • Correlation Heatmap
  • CCF GDP
  • CCF Interest
  • CCF Inflation
  • CCF Unemployment
Code
# `final` is the merged quarterly data set of the stock price and macroeconomic variables (prepared earlier)
numeric_data <- c("T.Adjusted","gdp", "interest", "inflation", "unemployment")
numeric_data <- final[, numeric_data]
normalized_data_numeric <- scale(numeric_data)
normalized_data <- ts(normalized_data_numeric, start = c(2010, 1), end = c(2021,10),frequency = 4)
ts_plot(normalized_data,
        title = "Normalized Time Series Data for T Stock and Macroeconomic Variables",
        Ytitle = "Normalized Values",
        Xtitle = "Year")
Code
# Get upper triangle of the correlation matrix
get_upper_tri <- function(cormat){
    cormat[lower.tri(cormat)]<- NA
    return(cormat)
}
cormat <- round(cor(normalized_data_numeric),2)

upper_tri <- get_upper_tri(cormat)

melted_cormat <- melt(upper_tri, na.rm = TRUE)
# Create a ggheatmap
ggheatmap <- ggplot(melted_cormat, aes(Var2, Var1, fill = value))+
 geom_tile(color = "white")+
 scale_fill_gradient2(low = "blue", high = "red", mid = "white", 
   midpoint = 0, limit = c(-1,1), space = "Lab", 
    name="Pearson\nCorrelation") +
  theme_minimal()+ # minimal theme
 theme(axis.text.x = element_text(angle = 45, vjust = 1, 
    size = 12, hjust = 1))+
 coord_fixed()

ggheatmap + 
geom_text(aes(Var2, Var1, label = value), color = "black", size = 4) +
theme(
  axis.title.x = element_blank(),
  axis.title.y = element_blank(),
  panel.grid.major = element_blank(),
  panel.border = element_blank(),
  panel.background = element_blank(),
  axis.ticks = element_blank(),
  legend.justification = c(1, 0),
  legend.position = c(0.6, 0.7),
  legend.direction = "horizontal")+
  guides(fill = guide_colorbar(barwidth = 7, barheight = 1,
                title.position = "top", title.hjust = 0.5))

Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("T.Adjusted")], normalized_data[, c("gdp")], 
    lag.max = 300,
    main = "Cros-Correlation Plot for T Stock Price and GDP Growth Rate ",
    ylab = "CCF")

Code
cat("The sum of cross correlation function is", sum(abs(ccf_result$acf)))
The sum of cross correlation function is 5.957981
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("T.Adjusted")], normalized_data[, c("interest")], 
    lag.max = 300,
    main = "Cros-Correlation Plot for T Stock Price and Interest Rate",
    ylab = "CCF")

Code
cat("The sum of cross correlation function is", sum(abs(ccf_result$acf)))
The sum of cross correlation function is 12.0267
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("T.Adjusted")], normalized_data[, c("inflation")], 
    lag.max = 300,
    main = "Cros-Correlation Plot for T Stock Price and Inflation Rate",
    ylab = "CCF")

Code
cat("The sum of cross correlation function is", sum(abs(ccf_result$acf)))
The sum of cross correlation function is 16.72642
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("T.Adjusted")], normalized_data[, c("unemployment")], 
    lag.max = 300,
    main = "Cros-Correlation Plot for T Stock Priceand Unemployment Rate",
    ylab = "CCF")

Code
cat("The sum of cross correlation function is", sum(abs(ccf_result$acf)))
The sum of cross correlation function is 20.3674

The Normalized Time Series Data for Stock Price and Macroeconomic Variables plot shows the same variables as the first plot after standardization with the scale() function in R, which transforms each variable to have a mean of 0 and a standard deviation of 1. The heatmap of the normalized data reveals that inflation and the unemployment rate exhibit strong correlations with the stock price, indicating that these variables may significantly influence stock price movements. Weaker correlations were observed between the stock price and GDP growth and the interest rate, suggesting that these variables may have less impact on stock price fluctuations. The cross-correlation plots confirm these findings: the sums of the absolute cross-correlations are largest for inflation (16.73) and unemployment (20.37), making them more suitable exogenous variables for the ARIMAX model when predicting AT&T movements.

Final exogenous variables (macroeconomic indicators): inflation rate and unemployment rate.

Endogenous and Exogenous Variables Plot

  • Plot
  • Check the stationarity
Code
final_data <- final %>% dplyr::select(Date, T.Adjusted, inflation, unemployment)
numeric_data <- c("T.Adjusted", "inflation","unemployment")
numeric_data <- final_data[, numeric_data]
normalized_data_numeric <- scale(numeric_data)
normalized_numeric_df <- data.frame(normalized_data_numeric)
normalized_data_ts <- ts(normalized_data_numeric, start = c(2010, 1), frequency = 4)

autoplot(normalized_data_ts, facets=TRUE) +
  xlab("Year") + ylab("") +
  ggtitle("AT&T Stock Price, Inflation Rate and Unemployment Rate in USA 2010-2023")

Code
# Convert the multivariate time series data to a matrix
final_data_ts_multivariate <- as.matrix(normalized_data_ts)

# Check for stationarity using the Phillips-Perron test
# (note: ur.pp treats its input as a single vector, so the three series are
#  stacked here; a per-series check is sketched after the results below)
phillips_perron_test <- ur.pp(final_data_ts_multivariate)
summary(phillips_perron_test)

################################## 
# Phillips-Perron Unit Root Test # 
################################## 

Test regression with intercept 


Call:
lm(formula = y ~ y.l1)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.3311 -0.1828 -0.0604  0.1125  4.4755 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.005902   0.041678   0.142    0.888    
y.l1        0.845352   0.042100  20.079   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.5189 on 153 degrees of freedom
Multiple R-squared:  0.7249,    Adjusted R-squared:  0.7231 
F-statistic: 403.2 on 1 and 153 DF,  p-value: < 2.2e-16


Value of test-statistic, type: Z-alpha  is: -22.8539 

         aux. Z statistics
Z-tau-mu            0.1434

The results of the Phillips-Perron unit root test indicate strong evidence against the null hypothesis of a unit root: the p-value for the coefficient of the lagged variable is far below the 0.05 significance level, and the Z-alpha test statistic of -22.8539 is large and negative, well below typical critical values. This suggests that the stacked normalized series is stationary.
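Since ur.pp stacks the matrix into a single vector, a more careful check would run the test on each column separately. A minimal sketch reusing normalized_data_ts; the loop and printed line are illustrative assumptions:

Code
# run the Phillips-Perron test separately on each series
for (col in colnames(normalized_data_ts)) {
  pp <- ur.pp(normalized_data_ts[, col], type = "Z-alpha", model = "constant")
  cat(col, ": Z-alpha statistic =", pp@teststat, "\n")
}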

To determine whether the residuals of the linear model exhibit ARCH effects, an ARCH LM test is conducted. The ACF and PACF plots of the differenced residuals are also used to identify suitable model orders.

Model Fitting

  • Plot
  • ARCH Test
  • ACF Plot
  • PACF Plot
Code
normalized_numeric_df$T.Adjusted <- ts(normalized_numeric_df$T.Adjusted, start=decimal_date(as.Date("2010-01-01", format = "%Y-%m-%d")), frequency = 4)
normalized_numeric_df$inflation <- ts(normalized_numeric_df$inflation, start=decimal_date(as.Date("2010-01-01", format = "%Y-%m-%d")), frequency = 4)
normalized_numeric_df$unemployment <- ts(normalized_numeric_df$unemployment, start=decimal_date(as.Date("2010-01-01", format = "%Y-%m-%d")), frequency = 4)

fit <- lm(T.Adjusted ~ inflation + unemployment, data=normalized_numeric_df)
fit.res <- ts(residuals(fit), start=decimal_date(as.Date("2010-01-01", format = "%Y-%m-%d")), frequency = 4)
############## Then look at the residuals ############
returns <- fit.res  %>% diff()
autoplot(returns)+ggtitle("Linear Model Returns")

Code
byd.archTest <- ArchTest(fit.res, lags = 1, demean = TRUE)
byd.archTest

    ARCH LM-test; Null hypothesis: no ARCH effects

data:  fit.res
Chi-squared = 12.762, df = 1, p-value = 0.0003537
Code
ggAcf(returns) +ggtitle("ACF for returns")

Code
ggPacf(returns) +ggtitle("PACF for returns")

The ARCH LM-test was conducted with the null hypothesis of no ARCH effects. The test resulted in a chi-squared value of 12.762 with one degree of freedom, and a very low p-value of 0.0003537. This suggests strong evidence against the null hypothesis, indicating the presence of ARCH effects in the data.

Based on the ACF and PACF plots of the returns, there is little significant autocorrelation or partial autocorrelation, suggesting p = 0 and q = 0 for the ARMA part. However, the ARCH test above indicates that a plain ARIMA model is not sufficient to capture the time-varying volatility, so a conditional variance model is also needed.

ARIMAX Model

  • Auto Arima Model
  • Auto Arima Residuals
Code
xreg <- cbind(Inflation = normalized_data_ts[, "inflation"],
              Unemployment = normalized_data_ts[, "unemployment"])
fit.auto <- auto.arima(normalized_data_ts[, "T.Adjusted"], xreg = xreg)
summary(fit.auto)
Series: normalized_data_ts[, "T.Adjusted"] 
Regression with ARIMA(0,1,0) errors 

Coefficients:
      Inflation  Unemployment
         -0.156       -0.3069
s.e.      0.150        0.0681

sigma^2 = 0.119:  log likelihood = -17.07
AIC=40.14   AICc=40.65   BIC=45.94

Training set error measures:
                     ME     RMSE       MAE       MPE     MAPE      MASE
Training set 0.03193586 0.334906 0.2596333 -52.59142 81.78774 0.5318549
                   ACF1
Training set -0.1104955
Code
checkresiduals(fit.auto)


    Ljung-Box test

data:  Residuals from Regression with ARIMA(0,1,0) errors
Q* = 1.767, df = 8, p-value = 0.9873

Model df: 0.   Total lags used: 8

Based on the results of the auto.arima function, the suggested best model is ARIMA(0,1,0), which matches the manually chosen ARIMA model. We can therefore proceed to select the best ARCH/GARCH model using ARIMA(0,1,0) as the base model.
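The manual choice can be reproduced with a small information-criterion search over low ARIMA orders with the same regressors. A sketch, where the grid bounds are an assumption and d = 1 is fixed by the differencing analysis:

Code
# compare low-order ARIMA(p,1,q) models with the exogenous regressors by AICc
for (p in 0:2) {
  for (q in 0:2) {
    fit_pq <- Arima(normalized_data_ts[, "T.Adjusted"], order = c(p, 1, q), xreg = xreg)
    cat("ARIMA(", p, ",1,", q, "): AICc =", fit_pq$aicc, "\n")
  }
}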

Squared Residuals

  • Plot
  • ACF Plot
  • PACF Plot
  • ACF Absolute
  • ARCH Model
Code
fit <- lm(T.Adjusted ~ inflation + unemployment, data=normalized_numeric_df)
fit.res <- ts(residuals(fit), start=decimal_date(as.Date("2010-01-01", format = "%Y-%m-%d")), frequency = 4)
fit <- Arima(fit.res, order=c(0,1,0))
res = fit$residuals
plot(res^2, main='Squared Residuals')

Code
acf(res^2,24, main = "ACF Residuals Square")

Code
pacf(res^2,24, main = "PACF Residuals Square")

Code
acf(abs(res), main = "ACF of Absolute Residuals")

Code
pacf(abs(res), main = "PACF of Absolute Residuals")

Code
summary(garchFit(~garch(1,0),res, trace=F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 0), data = res, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 0)
<environment: 0x133d2e208>
 [data = res]

Conditional Distribution:
 norm 

Coefficient(s):
       mu      omega     alpha1  
-0.014923   0.061675   1.000000  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)
mu      -0.01492     0.04410   -0.338    0.735
omega    0.06168     0.05922    1.041    0.298
alpha1   1.00000     0.91140    1.097    0.273

Log Likelihood:
 -24.54432    normalized:  -0.4720061 

Description:
 Tue Jan  9 20:57:55 2024 by user:  


Standardised Residuals Tests:
                                Statistic p-Value     
 Jarque-Bera Test   R    Chi^2  21.09339  2.628022e-05
 Shapiro-Wilk Test  R    W      0.9298323 0.004419964 
 Ljung-Box Test     R    Q(10)  14.55682  0.1490741   
 Ljung-Box Test     R    Q(15)  23.42799  0.075466    
 Ljung-Box Test     R    Q(20)  30.70192  0.05925058  
 Ljung-Box Test     R^2  Q(10)  3.989285  0.9478291   
 Ljung-Box Test     R^2  Q(15)  4.464593  0.9957709   
 Ljung-Box Test     R^2  Q(20)  7.691302  0.993721    
 LM Arch Test       R    TR^2   5.066871  0.955708    

Information Criterion Statistics:
     AIC      BIC      SIC     HQIC 
1.059397 1.171969 1.053211 1.102554 

From the squared residuals of the best ARIMA model, the ACF and PACF plots show most values lying within the blue significance bounds, indicating little remaining autocorrelation or partial autocorrelation and a good fit of the mean model. The remaining structure in the squared residuals suggests variance-model orders of p = 1 and q = 0, i.e. an ARCH(1) specification.

The best models are ARIMA(0,1,0) and ARCH(1).
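As a cross-check, candidate variance models can be compared by their information criteria, which fGarch stores in the fitted object's @fit$ics slot. A sketch reusing the residual series res; the candidate list is an assumption:

Code
# compare candidate ARCH/GARCH variance specifications by AIC
for (spec in c("garch(1,0)", "garch(2,0)", "garch(1,1)")) {
  g <- garchFit(as.formula(paste("~", spec)), data = res, trace = FALSE)
  cat(spec, ": AIC =", g@fit$ics["AIC"], "\n")
}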

Best Model

  • ARIMA Model
  • GARCH Model
  • Volatility
Code
#fitting an ARIMA model to the Inflation variable
inflation_fit <- auto.arima(normalized_numeric_df$inflation)
finflation <- forecast(inflation_fit)

#fitting an ARIMA model to the Unemployment variable
unemployment_fit <- auto.arima(normalized_numeric_df$unemployment)
funemployment <- forecast(unemployment_fit)

# best model fit for forecasting
xreg <- cbind(Inflation = normalized_data_ts[, "inflation"],
              Unemployment = normalized_data_ts[, "unemployment"])

arima.fit <- Arima(normalized_data_ts[, "T.Adjusted"], order=c(0,1,0), xreg=xreg)
summary(arima.fit)
Series: normalized_data_ts[, "T.Adjusted"] 
Regression with ARIMA(0,1,0) errors 

Coefficients:
      Inflation  Unemployment
         -0.156       -0.3069
s.e.      0.150        0.0681

sigma^2 = 0.119:  log likelihood = -17.07
AIC=40.14   AICc=40.65   BIC=45.94

Training set error measures:
                     ME     RMSE       MAE       MPE     MAPE      MASE
Training set 0.03193586 0.334906 0.2596333 -52.59142 81.78774 0.5318549
                   ACF1
Training set -0.1104955
Code
summary(final.fit <- garchFit(~garch(1,0), res,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 0), data = res, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 0)
<environment: 0x1346afcf8>
 [data = res]

Conditional Distribution:
 norm 

Coefficient(s):
       mu      omega     alpha1  
-0.014923   0.061675   1.000000  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)
mu      -0.01492     0.04410   -0.338    0.735
omega    0.06168     0.05922    1.041    0.298
alpha1   1.00000     0.91140    1.097    0.273

Log Likelihood:
 -24.54432    normalized:  -0.4720061 

Description:
 Tue Jan  9 20:57:56 2024 by user:  


Standardised Residuals Tests:
                                Statistic p-Value     
 Jarque-Bera Test   R    Chi^2  21.09339  2.628022e-05
 Shapiro-Wilk Test  R    W      0.9298323 0.004419964 
 Ljung-Box Test     R    Q(10)  14.55682  0.1490741   
 Ljung-Box Test     R    Q(15)  23.42799  0.075466    
 Ljung-Box Test     R    Q(20)  30.70192  0.05925058  
 Ljung-Box Test     R^2  Q(10)  3.989285  0.9478291   
 Ljung-Box Test     R^2  Q(15)  4.464593  0.9957709   
 Ljung-Box Test     R^2  Q(20)  7.691302  0.993721    
 LM Arch Test       R    TR^2   5.066871  0.955708    

Information Criterion Statistics:
     AIC      BIC      SIC     HQIC 
1.059397 1.171969 1.053211 1.102554 
Code
ht <- final.fit@h.t #a numeric vector with the conditional variances (h.t = sigma.t^delta)

#############################
data=data.frame(final)
data$Date<-as.Date(data$Date,"%Y-%m-%d")


data2= data.frame(ht,data$Date)
ggplot(data2, aes(y = ht, x = data.Date)) + geom_line(col = '#73C6B6') + ylab('Conditional Variance') + xlab('Date')

From the ARIMA(0,1,0) fit, the training set error measures suggest a reasonable fit, with low mean absolute and root mean squared errors and little residual autocorrelation. The GARCH(1,0) model, equivalent to ARCH(1), is used to estimate the volatility of the residuals of the regression model. It consists of a mean equation for the level of the residuals and a variance equation for their conditional variance. The estimated mean is close to zero, and the variance equation indicates that the conditional variance depends on the previous period’s squared residual. The AIC, BIC, SIC, and HQIC values are all relatively low, indicating a good fit. The Ljung-Box tests on the standardized residuals and their squares show no significant remaining autocorrelation, although the Jarque-Bera and Shapiro-Wilk tests point to some departure from normality.

The conditional variance is elevated around 2020 and decreases gradually afterwards. This suggests the stock price experienced large fluctuations in 2020, with the market stabilizing more recently.

Model Diagnostics

  • Residuals
  • QQ Plot
  • Box Test
Code
# note: tseries::garch takes order = c(GARCH order, ARCH order), so ARCH(1) is order = c(0,1)
fit2 <- garch(res, order=c(0,1), trace=F)
checkresiduals(fit2) 

Code
qqnorm(fit2$residuals, pch = 1)
qqline(fit2$residuals, col = "blue", lwd = 2)

Code
Box.test(fit2$residuals, type = "Ljung-Box")

    Box-Ljung test

data:  fit2$residuals
X-squared = 0.20159, df = 1, p-value = 0.6534

The ACF plot of the residuals shows all values within the blue significance bounds, indicating that the residuals are not significantly autocorrelated, and the residuals lying roughly between -2 and 2 is acceptable. In addition, the QQ plot of the residuals falls close to the reference line, suggesting that they are approximately normally distributed.

For the Box-Ljung test, the p-value of 0.6534 indicates that the model’s residuals are not significantly autocorrelated, meaning the model has captured most of the structure in the data. This suggests the model fits the data well and its predictions can be used with reasonable confidence.

Forecast

Code
predict(final.fit, n.ahead = 5, plot=TRUE)

  meanForecast meanError standardDeviation lowerInterval upperInterval
1  -0.01492338  1.240661          1.240661     -2.446574      2.416727
2  -0.01492338  1.265273          1.265273     -2.494812      2.464966
3  -0.01492338  1.289415          1.289415     -2.542130      2.512283
4  -0.01492338  1.313113          1.313113     -2.588577      2.558731
5  -0.01492338  1.336391          1.336391     -2.634201      2.604355

The forecast is based on the best model, ARIMAX(0,1,0)+ARCH(1). The ARIMAX component captures the differenced level of the series together with the impact of the exogenous variables, while the ARCH component accounts for volatility clustering in the residuals, producing the widening prediction intervals above.
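Note that predict(final.fit, ...) forecasts only the residual (volatility) component, which is why the mean forecast is flat. A sketch of how the mean of the stock price series could be forecast from the ARIMAX fit, feeding in the exogenous forecasts finflation and funemployment computed earlier; the 5-step horizon is chosen to match the volatility forecast:

Code
# forecast the mean equation using the forecasted exogenous regressors
fxreg <- cbind(Inflation = finflation$mean[1:5],
               Unemployment = funemployment$mean[1:5])
mean_forecast <- forecast(arima.fit, xreg = fxreg)
autoplot(mean_forecast) +
  ggtitle("5-Step-Ahead Forecast of the Normalized T Stock Price")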

Equation of the Model

The equation of the ARIMAX(0,1,0) model, a regression with ARIMA(0,1,0) errors, is:

\(Y_{t} = \beta_{1}\,\text{Inflation}_{t} + \beta_{2}\,\text{Unemployment}_{t} + \eta_{t}, \qquad \eta_{t} = \eta_{t-1} + \epsilon_{t}\)

where \(Y_t\) is the stock price series, \(\beta_1\) and \(\beta_2\) are the coefficients of the exogenous variables, \(\eta_t\) is the regression error following a random walk, and \(\epsilon_t\) is a white-noise error term.

The equation of the ARCH(1) model is:

\(\sigma_{t}^{2}=\alpha_{0}+\alpha_{1}\epsilon_{t-1}^{2}\)

where \(\sigma^2_t\) is the conditional variance at time \(t\), \(\alpha_0\) and \(\alpha_1\) are the parameters, and \(\epsilon_{t-1}\) is the previous period’s error term.

The combined equations of the ARIMAX(0,1,0)+ARCH(1) model are:

\(Y_{t} = \beta_{1}\,\text{Inflation}_{t} + \beta_{2}\,\text{Unemployment}_{t} + \eta_{t}, \qquad \eta_{t} = \eta_{t-1} + \epsilon_{t}\)

\(\sigma_{t}^{2}=\alpha_{0}+\alpha_{1}\epsilon_{t-1}^{2}\)
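Substituting the estimated coefficients from the model outputs above (rounded), the fitted equations are:

\(Y_{t} = -0.156\,\text{Inflation}_{t} - 0.3069\,\text{Unemployment}_{t} + \eta_{t}\)

\(\sigma_{t}^{2} \approx 0.0617 + 1.0\,\epsilon_{t-1}^{2}\)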